11 research outputs found

    Multi-class Multi-label Classification and Detection of Lumbar Intervertebral Disc Degeneration MR Images using Decision Tree Classifiers

    Evidence-based medicine decision-making based on computer-aided methods is a new direction in modern healthcare. Data mining techniques in computer-aided diagnosis (CAD) are powerful and widely used tools for efficient and automated classification, retrieval, and pattern recognition of medical images. They have become highly desirable for healthcare providers because of the massive and increasing volume of intervertebral disc degeneration images. A fast and efficient classification and retrieval system that handles query images with a high degree of accuracy is vital. The method proposed in this paper for automatic detection and classification of lumbar intervertebral disc degeneration in MRI-T2 images makes use of texture-based pattern recognition in data mining. A dataset of 181 segmented ROIs, corresponding to 89 normal and 92 degenerated (narrowed) discs at different vertebral levels, was analyzed, and textural features (contrast, entropy, and energy) were extracted from each disc ROI. The extracted features were employed in the design of a pattern recognition system using the C4.5 decision tree classifier. The system achieved a classification accuracy of 93.33% in a multi-class multi-label classification system based on the affected disc position. This work, combined with its high accuracy, provides valuable knowledge for orthopedists diagnosing lumbar intervertebral disc degeneration in T2-weighted sagittal magnetic resonance images and supports automated annotation, archiving, and retrieval of these images for later use.
    Keywords: Data Mining, Image Processing, Lumbar Intervertebral Disc Degeneration, MRI-T2, Decision Trees, Multi-class Multi-label Classification
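    The textural features named in the abstract (contrast, entropy, and energy) are standard Haralick statistics computed from a gray-level co-occurrence matrix (GLCM). A minimal sketch of those three formulas, on a toy normalized GLCM rather than an actual disc ROI:

    ```python
    import math

    def glcm_features(glcm):
        """Compute contrast, entropy, and energy from a normalized
        gray-level co-occurrence matrix (rows of probabilities summing to 1)."""
        contrast = 0.0
        entropy = 0.0
        energy = 0.0
        for i, row in enumerate(glcm):
            for j, p in enumerate(row):
                contrast += (i - j) ** 2 * p   # weighted intensity difference
                energy += p * p                # sum of squared probabilities
                if p > 0:
                    entropy -= p * math.log2(p)
        return contrast, entropy, energy

    # Toy 2x2 co-occurrence matrix (already normalized), not real MRI data.
    glcm = [[0.25, 0.25],
            [0.25, 0.25]]
    contrast, entropy, energy = glcm_features(glcm)
    print(contrast, entropy, energy)  # 0.5 2.0 0.25
    ```

    In the paper's pipeline, one such feature triple per disc ROI would be fed to the C4.5 decision tree.
    
    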

    Calibration of smartphone’s rear dual camera system

    This paper aims to calibrate a smartphone's rear dual camera system composed of two lenses: a wide-angle lens and a telephoto lens. The proposed approach handles large images. Calibration was done by capturing 13 photos of a chessboard pattern from different exposure positions. First, photos were captured in dual camera mode. Then, for both the wide-angle and telephoto lenses, the image coordinates of the chessboard node points were extracted. Afterwards, the intrinsic, extrinsic, and lens distortion parameters of each lens were calculated. To enhance the accuracy of the calibration model, a constrained least-squares solution was applied, with the constraint that the relative extrinsic parameters between the wide-angle and telephoto lenses were held constant regardless of the exposure position. Moreover, the photos were rectified to eliminate the effect of lens distortion. For evaluation, two oriented photos were chosen to perform a stereo-pair intersection, and the node points of the chessboard pattern were used as check points.
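    The intrinsic and distortion parameters estimated here enter the standard pinhole projection with radial distortion. A minimal sketch of that forward model, with made-up focal length and principal point values (not the paper's calibrated parameters):

    ```python
    def distort(x, y, k1, k2):
        """Apply radial lens distortion (first two Brown-model
        coefficients) to normalized image coordinates."""
        r2 = x * x + y * y
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        return x * scale, y * scale

    def project(point_cam, fx, fy, cx, cy, k1=0.0, k2=0.0):
        """Pinhole projection of a camera-frame 3D point to pixel
        coordinates, including radial distortion."""
        X, Y, Z = point_cam
        x, y = X / Z, Y / Z             # normalized coordinates
        xd, yd = distort(x, y, k1, k2)  # apply lens distortion
        return fx * xd + cx, fy * yd + cy

    # Hypothetical intrinsics for illustration only.
    u, v = project((0.1, -0.2, 2.0), fx=1500, fy=1500, cx=960, cy=540)
    print(u, v)  # 1035.0 390.0
    ```

    Calibration inverts this model: given many known chessboard points and their measured pixel coordinates, least squares recovers fx, fy, cx, cy, k1, k2, and the extrinsics.
    
    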

    Occluded iris classification and segmentation using self-customized artificial intelligence models and iterative randomized Hough transform

    A fast and accurate iris recognition system is presented for noisy iris images, mainly those affected by eye occlusion and specular reflection. The proposed recognition system adopts self-customized support vector machine (SVM) and convolutional neural network (CNN) classification models, where the models are built from the iris-texture GLCM and automated deep-feature datasets extracted exclusively for each subject. The image processing techniques used were optimized, both for iris region segmentation using the iterative randomized Hough transform (IRHT) and for classification, where a few significant features, selected by singular value decomposition (SVD) analysis, are used to test whether the moving-window matrix belongs to the iris or non-iris class. The iris-segment matching is optimized by first extracting the largest axis-parallel rectangle inscribed in the classified occluded-iris binary image; its corresponding iris region is cross-correlated with the same subject's iris reference image to obtain the most correlated iris segments in the two eye images. Finally, the iris-code Hamming distance of the two most correlated segments is calculated to identify the subject's unique iris pattern with high accuracy, security, and reliability.
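    The final matching step relies on the fractional Hamming distance between binary iris codes, typically restricted to bits that are unoccluded in both images. A minimal sketch with toy bit vectors (not real iris codes):

    ```python
    def hamming_distance(code_a, code_b, mask_a, mask_b):
        """Fractional Hamming distance between two binary iris codes,
        counting only bits marked valid (unoccluded) in both masks."""
        usable = 0
        disagree = 0
        for a, b, ma, mb in zip(code_a, code_b, mask_a, mask_b):
            if ma and mb:
                usable += 1
                if a != b:
                    disagree += 1
        # If no bits are usable, treat the pair as a non-match.
        return disagree / usable if usable else 1.0

    a    = [1, 0, 1, 1, 0, 0, 1, 0]
    b    = [1, 0, 0, 1, 0, 1, 1, 0]
    mask = [1, 1, 1, 1, 1, 1, 0, 0]  # last two bits occluded
    print(hamming_distance(a, b, mask, mask))  # 2 disagreements / 6 usable bits
    ```

    A small distance (commonly below ~0.32 in the iris literature) indicates the two segments come from the same iris.
    
    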

    Three-dimensional kidney’s stones segmentation and chemical composition detection

    Kidney stones are a common and extremely painful disease and can affect any part of the urinary tract. Ultrasound and computed tomography (CT) are the imaging modalities most frequently used for patients with acute flank pain. In this paper, we design an automated system for 3D kidney segmentation and stone detection, including evaluation of stone number and size. The proposed system is built on CT kidney image series of 10 subjects, four healthy (with no stones) and the rest with stones according to medical diagnosis, and its performance is tested on 32 CT kidney image series. The designed system can extract the kidney from either abdominal or pelvic non-contrast CT series and distinguishes stones from the surrounding tissue in the kidney image; it can also analyze the stones and classify them in vivo for further medical treatment. The results agreed with the medical doctor's diagnoses. The system could be improved by analyzing the stones in the laboratory and using a larger CT dataset. The present method is not limited to extracting stones; a new approach is also proposed to extract the 3D kidneys with 99% accuracy.
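    Counting stones and measuring their sizes in a segmented 3D volume reduces to labeling connected components of the stone mask. A minimal sketch, assuming a binary voxel volume and 6-connectivity (the paper's actual segmentation pipeline is not reproduced here):

    ```python
    from collections import deque

    def count_stones(volume, min_voxels=1):
        """Find 6-connected components in a binary 3D volume (nested
        lists, indexed [z][y][x]) and return their voxel counts,
        a stand-in for stone number and size."""
        nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
        seen = set()
        sizes = []
        for z in range(nz):
            for y in range(ny):
                for x in range(nx):
                    if volume[z][y][x] and (z, y, x) not in seen:
                        size = 0
                        queue = deque([(z, y, x)])
                        seen.add((z, y, x))
                        while queue:  # breadth-first flood fill
                            cz, cy, cx = queue.popleft()
                            size += 1
                            for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                                n = (cz + dz, cy + dy, cx + dx)
                                if (0 <= n[0] < nz and 0 <= n[1] < ny
                                        and 0 <= n[2] < nx
                                        and volume[n[0]][n[1]][n[2]]
                                        and n not in seen):
                                    seen.add(n)
                                    queue.append(n)
                        if size >= min_voxels:
                            sizes.append(size)
        return sizes

    # Toy single-slice volume with two separate "stones".
    vol = [[[1, 1, 0],
            [0, 0, 0],
            [0, 0, 1]]]
    print(sorted(count_stones(vol)))  # [1, 2]
    ```

    Multiplying each voxel count by the scanner's voxel dimensions would give physical stone volumes.
    
    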

    Generation of Synthetic-Pseudo MR Images from Real CT Images

    This study aimed to generate synthetic MR images from real CT images. The CT# mean and standard deviation of a moving window across every pixel in the reconstructed CT images were mapped to their corresponding tissue-mimicking types. Identifying the tissue enabled remapping it to its intrinsic parameters: T1, T2, and proton density (ρ). Lastly, synthetic weighted MR images of a selected slice were generated by simulating a spin-echo sequence using the intrinsic parameters and appropriate contrast parameters (TE and TR). Experiments were performed on a 3D multimodality abdominal phantom and on human knees at different TE and TR parameters to confirm the clinical effectiveness of the approach. The results demonstrate the validity of generating synthetic MR images at different weightings using only CT images and the three predefined mapping functions. The slope of the fitting line and the percentage root-mean-square difference (PRD) between real and synthetic image vector representations were (0.73, 10%), (0.9, 18%), and (0.2, 8.7%) for the T1-, T2-, and ρ-weighted images of the phantom, respectively. For the human knee images, the slope and PRD were, on average, 0.89 and 18.8%, respectively. The generated MR images provide valuable guidance for physicians in deciding whether acquiring real MR images is necessary.
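    The spin-echo simulation at the heart of this approach uses the classic signal equation S = ρ(1 − e^(−TR/T1))·e^(−TE/T2). A minimal sketch with hypothetical tissue parameters (the T1/T2/ρ values below are illustrative, not the study's mapping functions):

    ```python
    import math

    def spin_echo_signal(rho, t1, t2, tr, te):
        """Classic spin-echo signal model:
        S = rho * (1 - exp(-TR/T1)) * exp(-TE/T2).
        All times in the same unit (e.g. milliseconds)."""
        return rho * (1.0 - math.exp(-tr / t1)) * math.exp(-te / t2)

    # Hypothetical tissue: T1 = 900 ms, T2 = 90 ms, rho = 1.0.
    s_t1w = spin_echo_signal(1.0, 900, 90, tr=500, te=15)   # short TR/TE: T1-weighted
    s_t2w = spin_echo_signal(1.0, 900, 90, tr=3000, te=90)  # long TR/TE: T2-weighted
    print(s_t1w, s_t2w)
    ```

    Evaluating this per pixel, with T1, T2, and ρ looked up from the CT-derived tissue map, yields a synthetic image at any chosen TR/TE weighting.
    
    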

    Automated Detection of Corneal Ulcer Using Combination Image Processing and Deep Learning

    Corneal ulcers are among the most common eye diseases. They result from various infections caused by bacteria, viruses, or parasites and may lead to ocular morbidity and visual disability; early detection can therefore reduce the likelihood of visual impairment. One of the most common techniques for corneal ulcer screening is slit-lamp imaging. This paper proposes two highly accurate automated systems to localize the corneal ulcer region: an image processing technique based on the Hough transform and a deep learning approach. The two methods are validated and tested on the publicly available SUSTech-SYSU database, and their accuracies are evaluated and compared. Both systems achieve an accuracy above 90%; the deep learning approach is the more accurate of the two, reaching 98.9% accuracy and a Dice similarity of 99.3%. However, the image processing method requires no explicit training model or parameter optimization. Both approaches can perform well in the medical field; moreover, the image processing model has an advantage over the deep learning model in that the latter needs a large training dataset to build reliable software for clinics. Both proposed methods help physicians assess corneal ulcer severity and improve treatment efficiency.
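    The Dice similarity used to score the segmentations is the overlap measure 2|A∩B| / (|A| + |B|) between the predicted and ground-truth ulcer masks. A minimal sketch on toy flattened binary masks:

    ```python
    def dice_similarity(mask_a, mask_b):
        """Dice coefficient between two flat binary masks:
        2 * |A intersect B| / (|A| + |B|)."""
        inter = sum(a and b for a, b in zip(mask_a, mask_b))
        total = sum(mask_a) + sum(mask_b)
        # Two empty masks are defined as a perfect match.
        return 2.0 * inter / total if total else 1.0

    pred  = [0, 1, 1, 1, 0, 0]  # predicted ulcer pixels
    truth = [0, 1, 1, 0, 0, 0]  # ground-truth ulcer pixels
    print(dice_similarity(pred, truth))  # 2*2 / (3+2) = 0.8
    ```

    A Dice of 99.3%, as reported for the deep learning approach, means the predicted and annotated ulcer regions overlap almost completely.
    
    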

    Intelligent Diagnosis and Classification of Keratitis

    A corneal ulcer is an open sore that forms on the cornea; it is usually caused by an infection or injury and can result in ocular morbidity. Early detection and discrimination between different ulcer diseases reduce the chances of visual disability. Traditional clinical methods that use slit-lamp images can be tiresome, expensive, and time-consuming. Instead, this paper proposes a deep learning approach to diagnose corneal ulcers, enabling better treatment. The paper suggests two modes for classifying corneal images, using manual and automatic deep learning feature extraction. Different dimensionality reduction techniques are utilized to uncover the most significant features that give the best results. Experimental results show that both manual and automatic feature extraction succeeded in discriminating ulcers from a general grading perspective, with ~93% accuracy using the 30 most significant features extracted by various dimensionality reduction techniques. On the other hand, automatic deep learning feature extraction discriminated severity grading with higher accuracy than type grading, regardless of the number of features used. To the best of our knowledge, this is the first report to attempt to distinguish corneal ulcers by grade, type, shape, and distribution. Identifying corneal ulcers at an early stage is a preventive measure that reduces aggravation, helps track the efficacy of the adopted medical treatment, and improves public health in remote, underserved areas.
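    Selecting the "most significant" features from a high-dimensional feature set can be done in many ways; the abstract does not name the specific techniques, so the sketch below uses simple variance ranking as an illustrative stand-in for dimensionality reduction:

    ```python
    def top_k_features(samples, k):
        """Rank feature columns by variance and return the indices of
        the k most variable ones -- a simple illustrative stand-in for
        selecting the most significant features."""
        n = len(samples)
        d = len(samples[0])
        variances = []
        for j in range(d):
            col = [row[j] for row in samples]
            mean = sum(col) / n
            var = sum((v - mean) ** 2 for v in col) / n
            variances.append((var, j))
        variances.sort(reverse=True)  # highest variance first
        return [j for _, j in variances[:k]]

    # Toy feature matrix: feature 1 is constant, so it carries no information.
    data = [[1.0, 5.0, 0.1],
            [2.0, 5.0, 0.2],
            [3.0, 5.0, 0.1]]
    print(top_k_features(data, 2))  # [0, 2]
    ```

    In the paper's setting, the 30 indices chosen this way (or by PCA and similar methods) would define the reduced feature vector passed to the classifier.
    
    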

    Employing Texture Features of Chest X-Ray Images and Machine Learning in COVID-19 Detection and Classification

    The novel coronavirus (nCoV-19) was first detected in December 2019. It spread worldwide, and the coronavirus disease (COVID-19) was declared a pandemic by March 2020. Patients present with a wide range of symptoms affecting multiple organ systems, predominantly the lungs; severe cases require intensive care unit (ICU) admission, while other cases are asymptomatic. Although early detection of the COVID-19 virus by real-time reverse transcription-polymerase chain reaction (RT-PCR) is effective, it is not efficient: it can produce false negatives and is time-consuming and expensive. To increase detection accuracy in vivo, radiological image-based methods such as a simple chest X-ray (CXR) can be utilized, reducing false negatives compared with using the RT-PCR technique alone. This paper employs various image processing techniques, extracts texture features from the radiological images, and feeds them to different artificial intelligence (AI) scenarios to distinguish between normal, pneumonia, and COVID-19 cases. The best scenario is then adopted to build an automated system that segments the chest region from the acquired image, enhances the segmented region, extracts the texture features, and finally classifies the image into one of the three classes. The best overall accuracy achieved is 93.1%, obtained with an ensemble classifier. Shaping the radiological data into a machine learning format reduces detection time and increases the chances of survival.
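    The final stage, classifying a texture-feature vector into normal, pneumonia, or COVID-19, can be illustrated with a nearest-centroid classifier; this is a deliberately minimal stand-in for the paper's ensemble classifier, and the feature values below are invented:

    ```python
    def nearest_centroid(train, labels, sample):
        """Classify a feature vector by the closest class centroid
        (squared Euclidean distance) -- a minimal stand-in for the
        paper's AI classification stage."""
        centroids = {}
        counts = {}
        for vec, lab in zip(train, labels):
            acc = centroids.setdefault(lab, [0.0] * len(vec))
            for i, v in enumerate(vec):
                acc[i] += v
            counts[lab] = counts.get(lab, 0) + 1
        best, best_d = None, float("inf")
        for lab, acc in centroids.items():
            centroid = [v / counts[lab] for v in acc]
            d = sum((a - b) ** 2 for a, b in zip(centroid, sample))
            if d < best_d:
                best, best_d = lab, d
        return best

    # Toy (contrast, entropy) texture vectors per class -- illustrative only.
    X = [[0.2, 1.0], [0.3, 1.1], [2.0, 3.0], [2.1, 2.9]]
    y = ["normal", "normal", "covid", "covid"]
    print(nearest_centroid(X, y, [2.0, 3.1]))  # covid
    ```

    An ensemble classifier, as used in the paper, would instead combine the votes of many such base learners to reach its 93.1% accuracy.
    
    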